What I Learned Watching a Humanoid Robot Do Laundry
Welcome back to TIME's new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? This summer, I found myself in the strange position of watching a humanoid robot try to load laundry. It squatted beside a washer-dryer unit, reached with one hand into a laundry basket that it was holding with the other, and put some clothes into the drum. But twice in a row, it dropped a piece of clothing and couldn't pick it back up. An engineer with a litter-picker grabbed the fallen cloth and sheepishly moved it behind the machine, out of my line of sight.
- Information Technology (0.72)
- Health & Medicine (0.51)
The U.K. Lacks the Ability to Respond to AI Disasters, New Report Warns
A major AI-enabled disaster is becoming increasingly likely as AI capabilities advance. But a new report from a London-based think tank warns that the British government does not have the emergency powers necessary to respond to AI-enabled disasters like the disruption of critical infrastructure or a terrorist attack. The U.K. must give its officials new powers, including the ability to compel tech companies to share information and to restrict public access to their AI models in an emergency, argues the report, which was shared exclusively with TIME ahead of its publication on Tuesday by the Centre for Long-Term Resilience (CLTR).
- Europe > United Kingdom (1.00)
- North America > United States > California (0.05)
Senate warned of 'perfect storm' leading to emerging AI disaster: 'Democracy itself is threatened'
Senators on Tuesday got the green light to impose significant federal regulation on artificial intelligence systems, not just from two industry giants, but from an AI expert who warned that the fate of the nation may depend on tough AI rules from Congress. A Senate Judiciary subcommittee heard from OpenAI CEO Sam Altman and IBM Chief Privacy & Trust Officer Christina Montgomery, who both invited federal oversight of AI even though they split on whether a new federal agency is needed. Between those witnesses sat Gary Marcus, the New York University professor emeritus who led Uber's AI labs from 2016 to 2017, and who issued a stark warning that human life is about to be upended by this unpredictable technology. "They can and will create persuasive lies at a scale humanity has never seen before," Marcus warned of generative AI systems. "Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems." Marcus warned that AI systems that do severe damage to humans' trust in each other have already been released and that the damage is already mounting.
[Photo caption: Gary Marcus, professor emeritus at New York University, speaks during a Senate Judiciary subcommittee hearing in Washington, D.C., on Tuesday, May 16, 2023.]
"A law professor, for example, was accused by a chatbot of sexual harassment.
- North America > United States > New York (0.48)
- North America > United States > District of Columbia > Washington (0.28)
- Law > Statutes (1.00)
- Government > Regional Government > North America Government > United States Government (0.39)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.35)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.63)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.63)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.49)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.44)
Modelling the Safety and Surveillance of the AI Race
Han, The Anh, Pereira, Luis Moniz, Santos, Francisco C., Lenaerts, Tom
Innovation, creativity, and competition are some of the fundamental underlying forces driving the advances in Artificial Intelligence (AI). This race for technological supremacy creates a complex ecology of choices that may lead to negative consequences, in particular when ethical and safety procedures are underestimated or even ignored. Here we resort to a novel game-theoretical framework to describe the ongoing AI bidding war, which also allows us to identify procedures that can influence this race toward desirable outcomes. By exploring the similarities between the ongoing competition in AI and evolutionary systems, we show that the timelines in which AI supremacy can be achieved play a crucial role in the evolution of safety-prone behaviour and in whether influencing procedures are required. When this supremacy can be achieved in the short term (near AI), the significant advantage gained from winning the race leads to the dominance of those who completely ignore safety precautions to gain extra speed, rendering the presence of reciprocal behaviour irrelevant. On the other hand, when such supremacy lies in the distant future, reciprocating on others' safety behaviour provides in itself an efficient solution, even when monitoring of unsafe development is hard. Our results suggest under what conditions AI safety behaviour requires additional supporting procedures and provide a basic framework to model them.
- North America > Canada > Quebec > Montreal (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > Portugal > Lisbon > Lisbon (0.04)
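The abstract's central trade-off can be illustrated with a toy race game. The sketch below is my own simplified model, not the authors' framework: every name and parameter (the prize B, per-round safety cost c, speed advantage s, per-round disaster probability pr, and timeline W) is an assumption chosen only to reproduce the qualitative effect the abstract describes, that a short race to supremacy favours ignoring safety while a long one favours it.

```python
# Illustrative toy model (NOT the paper's exact framework): two developers
# race to an AI prize B over a timeline of W development rounds.
# SAFE pays a per-round safety cost c; UNSAFE skips safety and develops
# s times faster, but each of its rounds risks disaster with probability pr.

def payoff(me, opp, W, B=100.0, c=0.05, s=1.5, pr=0.02):
    """Expected payoff of strategy `me` racing strategy `opp`."""
    def survives(strat):
        # Probability of finishing the race without a disaster.
        return 1.0 if strat == "SAFE" else (1 - pr) ** (W / s)

    p_me, p_opp = survives(me), survives(opp)
    cost = c * W if me == "SAFE" else 0.0
    if me == opp:
        # Equal speed: survivors split the prize; a sole survivor takes all.
        return p_me * p_opp * B / 2 + p_me * (1 - p_opp) * B - cost
    if me == "UNSAFE":
        # Faster party: takes the whole prize whenever it avoids disaster.
        return p_me * B - cost
    # SAFE vs UNSAFE: wins only if the unsafe rival self-destructs.
    return (1 - p_opp) * B - cost

for W in (5, 50, 500):  # near-term vs distant AI supremacy
    reply = {o: max(("SAFE", "UNSAFE"), key=lambda m: payoff(m, o, W))
             for o in ("SAFE", "UNSAFE")}
    print(f"W={W:3d}: best reply to SAFE is {reply['SAFE']}, "
          f"to UNSAFE is {reply['UNSAFE']}")
```

With these arbitrary defaults, UNSAFE is the best reply to everything at W=5 and SAFE is the best reply to everything at W=500, echoing the abstract's contrast between near-term and distant AI supremacy; at intermediate timelines the best reply depends on what the opponent plays, which is where the paper's reciprocity and monitoring mechanisms become relevant.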